
fix: use full mode for jieba word segmentation #2813

Merged

shaohuzhang1 merged 1 commit into main from pr@main@fix_lcut on Apr 7, 2025

Conversation

@shaohuzhang1 (Contributor) commented:

fix: use full mode for jieba word segmentation


f2c-ci-robot bot commented Apr 7, 2025

Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected; please follow our release note process to remove it.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


f2c-ci-robot bot commented Apr 7, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@shaohuzhang1 shaohuzhang1 merged commit 5e43bb9 into main Apr 7, 2025
4 checks passed
@shaohuzhang1 shaohuzhang1 deleted the pr@main@fix_lcut branch April 7, 2025 10:51
- extract_tags = jieba.lcut(text)
+ extract_tags = jieba.lcut(text, cut_all=True)
  result = " ".join(extract_tags)
  return result
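
For context, a minimal sketch of what this one-line change does to the token stream, using the example sentence from jieba's README (exact tokens can vary with the dictionary version):

    import jieba

    text = "我来到北京清华大学"  # example sentence from jieba's README

    # Precise mode (removed line): the single most likely segmentation
    print(jieba.lcut(text))
    # ['我', '来到', '北京', '清华大学']

    # Full mode (added line): every dictionary word found in the text,
    # overlaps included, trading precision for recall when the tokens
    # feed a full-text index
    print(jieba.lcut(text, cut_all=True))
    # ['我', '来到', '北京', '清华', '清华大学', '华大', '大学']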
Contributor Author commented:

The provided code looks generally correct, but it can be optimized and clarified slightly:

  1. Function Naming:

    • If to_ts_vector and to_query share the same segmentation logic, one of them is redundant.
    • Consider consolidating them, or renaming one so the difference in purpose is clear.
  2. Segmentation Behavior:

    • In both to_ts_vector and to_query, jieba.lcut splits text into words without filtering out punctuation or other delimiters, so numbers, symbols, etc. end up in the token list alongside Chinese word sequences.

    If you need finer-grained tokens for search, consider jieba.lcut_for_search(), which runs in search-engine mode: on top of precise mode, it further re-cuts long words to improve recall. Note that it does not strip digits or punctuation; those would need a post-processing filter:

    from jieba import lcut_for_search
    
    def to_query(text: str):
        extract_tags = lcut_for_search(text)
        result = " ".join(extract_tags)
        return result
  3. Code Readability:

    • It's generally better practice to use descriptive variable names instead of generic ones like result.

Here’s an updated version with some of these suggestions applied:

import jieba

def get_key_by_word_dict(key, word_dict):
    # Your existing implementation here
    ...

def process_text_for_segmentation(text: str):
    # Search-engine mode: precise segmentation, then long words re-cut for recall
    return " ".join(jieba.lcut_for_search(text))

def to_query(text: str):
    """Convert text into query format."""
    processed_text = process_text_for_segmentation(text)
    return processed_text

These changes make the code more readable and maintainable.
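
For reference, a quick usage sketch of the suggested to_query helper; the sample sentence and expected tokens follow jieba's README example for search-engine mode (actual output depends on the dictionary version):

    print(to_query("小明硕士毕业于中国科学院计算所"))
    # Search-engine mode re-cuts long words, so overlapping tokens appear, e.g.:
    # 小明 硕士 毕业 于 中国 科学 学院 科学院 中国科学院 计算 计算所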
